
    Fully automated landmarking and facial segmentation on 3D photographs

    Three-dimensional facial stereophotogrammetry provides a detailed representation of craniofacial soft tissue without the use of ionizing radiation. While manual annotation of landmarks serves as the current gold standard for cephalometric analysis, it is a time-consuming process and is prone to human error. The aim of this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach. Ten landmarks were manually annotated on 2897 3D facial photographs by a single observer. The automated landmarking workflow involved two successive DiffusionNet models and additional algorithms for facial segmentation. The dataset was randomly divided into a training and a test dataset. The training dataset was used to train the deep learning networks, whereas the test dataset was used to evaluate the performance of the automated workflow. The precision of the workflow was evaluated by calculating the Euclidean distances between the automated and manual landmarks and compared to the intra-observer and inter-observer variability of manual annotation and the semi-automated landmarking method. The workflow was successful in 98.6% of all test cases. The deep learning-based landmarking method achieved precise and consistent landmark annotation. The mean precision of 1.69 (±1.15) mm was comparable to the inter-observer variability (1.31 ±0.91 mm) of manual annotation. The Euclidean distance between the automated and manual landmarks was within 2 mm in 69% of cases. Automated landmark annotation on 3D photographs was achieved with the DiffusionNet-based approach. The proposed method allows quantitative analysis of large datasets and may be used in diagnosis, follow-up, and virtual surgical planning. Comment: 13 pages, 4 figures, 7 tables, repository https://github.com/rumc3dlab/3dlandmarkdetection
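
    The precision metric used above, the Euclidean distance between automated and manual landmarks, can be reproduced in a few lines. The following is a minimal sketch, assuming both annotation sets are available as (N, 3) NumPy arrays in millimetres; the values shown are hypothetical and not taken from the study.

```python
import numpy as np

def landmark_precision(auto_lm, manual_lm):
    """Per-landmark Euclidean distances (mm) between automated and manual
    annotations, both given as (N, 3) arrays of 3D coordinates."""
    auto_lm = np.asarray(auto_lm, dtype=float)
    manual_lm = np.asarray(manual_lm, dtype=float)
    return np.linalg.norm(auto_lm - manual_lm, axis=1)

# Hypothetical coordinates: report mean precision and the fraction of
# landmarks within a 2 mm clinical threshold.
auto = np.array([[10.2, 4.1, 33.0], [55.7, 8.9, 30.2]])
manual = np.array([[11.0, 4.5, 32.1], [54.9, 9.3, 31.0]])
d = landmark_precision(auto, manual)
print(f"mean = {d.mean():.2f} mm, SD = {d.std():.2f} mm")
print(f"within 2 mm: {(d <= 2.0).mean():.0%}")
```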

    Dataset of prostate MRI annotated for anatomical zones and cancer

    In the present work, we present a publicly available, expert-segmented, representative dataset of 158 3.0 Tesla biparametric MRIs [1]. There is an increasing number of studies investigating prostate and prostate carcinoma segmentation using deep learning (DL) with 3D architectures [2], [3], [4], [5], [6], [7]. The development of robust and data-driven DL models for prostate segmentation and assessment is currently limited by the availability of openly available, expert-annotated datasets [8], [9], [10]. The dataset contains 3.0 Tesla MRI images of the prostate of patients with suspected prostate cancer. Patients over 50 years of age who had a 3.0 Tesla MRI scan of the prostate that met PI-RADS version 2.1 technical standards were included. All patients received a subsequent biopsy or surgery so that the MRI diagnosis could be verified and matched with the histopathologic diagnosis. For patients who had undergone multiple MRIs, the most recent MRI acquired less than six months before biopsy/surgery was included. All patients were examined at a German university hospital (Charité Universitätsmedizin Berlin) between 02/2016 and 01/2020. All MRIs were acquired with two 3.0 Tesla MRI scanners (Siemens VIDA and Skyra, Siemens Healthineers, Erlangen, Germany). Axial T2-weighted (T2W) sequences and axial diffusion-weighted imaging (DWI) sequences with apparent diffusion coefficient (ADC) maps were included in the dataset. T2W sequences and ADC maps were annotated by two board-certified radiologists with 6 and 8 years of experience, respectively. For T2W sequences, the central gland (central zone and transitional zone) and peripheral zone were segmented. If areas of suspected prostate cancer (PI-RADS score of ≄ 4) were identified on examination, they were segmented in both the T2W sequences and the ADC maps. Because restricted diffusion is best seen in DWI images with high b-values, only these images were selected and all images with low b-values were discarded. Data were then anonymized and converted to NIfTI (Neuroimaging Informatics Technology Initiative) format.
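
    As a usage illustration, the sketch below loads one NIfTI volume and its zone segmentation with nibabel and reports the voxel spacing and label counts. The file names and label encoding are assumptions for illustration only; the dataset description in [1] documents the actual layout.

```python
import nibabel as nib
import numpy as np

# Hypothetical file names; the published dataset layout may differ.
t2w = nib.load("t2w.nii.gz")
mask = nib.load("t2w_zones_seg.nii.gz")

volume = t2w.get_fdata()                # image intensities
labels = mask.get_fdata().astype(int)   # assumed: 0 = background, 1 = central gland, 2 = peripheral zone

print("voxel spacing (mm):", t2w.header.get_zooms())
for lab in np.unique(labels):
    print(f"label {lab}: {np.count_nonzero(labels == lab)} voxels")
```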

    3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge

    Teeth localization, segmentation, and labeling from intra-oral 3D scans are essential tasks in modern dentistry to enhance dental diagnostics, treatment planning, and population-based studies on oral health. However, developing automated algorithms for teeth analysis presents significant challenges due to variations in dental anatomy, imaging protocols, and the limited availability of publicly accessible data. To address these challenges, the 3DTeethSeg'22 challenge was organized in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2022, with a call for algorithms tackling teeth localization, segmentation, and labeling from intraoral 3D scans. A dataset comprising a total of 1800 scans from 900 patients was prepared, and each tooth was individually annotated by a human-machine hybrid algorithm. A total of 6 algorithms were evaluated on this dataset. In this study, we present the evaluation results of the 3DTeethSeg'22 challenge. The 3DTeethSeg'22 challenge code can be accessed at: https://github.com/abenhamadou/3DTeethSeg22_challenge. Comment: 29 pages, MICCAI 2022 Singapore, Satellite Event, Challenge
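
    For orientation, a minimal sketch of inspecting a per-vertex labelled intraoral scan is given below. The mesh and JSON file names, the "labels" key, and the FDI-per-vertex convention are assumptions for illustration; the challenge repository linked above documents the actual data layout.

```python
import json
import numpy as np
import trimesh

# Hypothetical paths; see the challenge repository for the real layout.
mesh = trimesh.load("scan_lower.obj", process=False)  # keep original vertex order
with open("scan_lower_labels.json") as f:
    ann = json.load(f)

labels = np.asarray(ann["labels"])          # assumed: one tooth label per vertex, 0 = gingiva
assert len(labels) == len(mesh.vertices)

teeth, counts = np.unique(labels[labels > 0], return_counts=True)
print("teeth present:", teeth.tolist())
print("vertices per tooth:", dict(zip(teeth.tolist(), counts.tolist())))
```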

    Optimizing efficiency in the creation of patient-specific plates through field-driven generative design in maxillofacial surgery

    Abstract Field-driven design is a novel approach that allows geometrical entities known as implicit bodies to be defined through equations. This technology does not rely upon conventional geometry subunits, such as polygons or edges; rather, it represents spatial shapes through mathematical functions within a geometrical field. The advantages in terms of computational speed and automation are substantial and well acknowledged in engineering, especially for lattice structures. Moreover, field-driven design amplifies the possibilities for generative design, facilitating the creation of shapes generated by the software on the basis of user-defined constraints. Given such potential, this paper suggests the possibility of using the software nTopology, which is currently the only software for field-driven generative design, in the context of patient-specific implant creation for maxillofacial surgery. Clinical scenarios of applicability, including trauma and orthognathic surgery, are discussed, as well as the integration of this new technology with current workflows of virtual surgical planning. This paper represents the first application of field-driven design in maxillofacial surgery and, although its results are preliminary, as it considers only the distance field derived from specific points of the reconstructed anatomy, it introduces the importance of this new technology for the future of personalized implant design in surgery.
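
    To make the notion of an implicit body concrete, the sketch below evaluates a simple signed distance field for two spheres and combines them with a Boolean union; the surface is the zero level set of the field. This is a generic illustration of field-driven modelling, not nTopology's API or the workflow proposed in the paper.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

def union(d1, d2):
    """Boolean union of two implicit bodies is a pointwise minimum."""
    return np.minimum(d1, d2)

# Sample the field on a coarse grid; the zero level set is the surface.
grid = np.stack(np.meshgrid(*[np.linspace(-2, 2, 64)] * 3, indexing="ij"), axis=-1)
field = union(sphere_sdf(grid, np.array([-0.5, 0.0, 0.0]), 1.0),
              sphere_sdf(grid, np.array([0.8, 0.0, 0.0]), 0.7))
inside = field < 0
print(f"occupied voxels: {inside.sum()} of {inside.size}")
```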

    Skeletal structure of asymmetric mandibular prognathism and retrognathism

    Abstract Background: This study aimed to compare the skeletal structures between mandibular prognathism and retrognathism among patients with facial asymmetry. Results: Patients who had mandibular asymmetry with retrognathism (Group A) in the Netherlands were compared with those with deviated mandibular prognathism (Group B) in Korea. All the data were obtained from 3D-reformatted cone-beam computed tomography images from each institute. The right and left condylar heads were located more posteriorly, inferiorly, and medially in Group B than in Group A. The deviated side of Group A and the contralateral side of Group B showed similar condylar width and height, ramus-proper height, and ramus height. Interestingly, there were no inter-group differences in the ramus-proper heights. Asymmetric mandibular body length was most strongly correlated with chin asymmetry in retrognathic asymmetry patients, whereas asymmetric elongation of the condylar process was the most important factor for chin asymmetry in deviated mandibular prognathism. Conclusion: Considering the 3D positional difference of the gonion and the large individual variations in frontal ramal inclination, the significant structural deformation in deviated mandibular prognathism needs to be considered in asymmetric prognathism patients. Therefore, individually planned surgical procedures that also correct the malpositioning of the mandibular ramus are recommended, especially in patients with asymmetric prognathism.

    Three-Dimensional Analysis of the Condylar Hypoplasia and Facial Asymmetry in Craniofacial Microsomia Using Cone-Beam Computed Tomography

    Purpose: To assess condylar hypoplasia and its correlation with craniofacial deformities in adults with unilateral craniofacial microsomia (CFM). Methods: Pretreatment cone-beam computed tomography scans of consecutive adults (mean age: 20.4 ± 3.0 years; range: 17.3 to 31.4 years) with Pruzansky-Kaban type I and IIA CFM were reconstructed in 3D. Both condyles were segmented. Asymmetry ratios (affected side/contralateral side) of condylar volume were calculated to indicate the extent of condylar hypoplasia. 3D cephalometry was performed to quantify the maxillomandibular morphology and facial asymmetry. The correlations between these measures were assessed using Pearson's or Spearman's correlation coefficients. Results: Thirty-six subjects were enrolled, consisting of 22 subjects with Pruzansky-Kaban type I and 14 subjects with type IIA. The condyles in the type IIA group were significantly more hypoplastic in height (asymmetry ratio: 40.69 vs 59.95%, P = .006) and volume (18.16 vs 47.84%, P < .001) compared to the type I group. The type IIA group had a significantly smaller SNB value than the type I group (72.94° vs 77.41°, P = .012) and significantly greater facial asymmetry (P < .05). The hypoplastic extent of condylar volume and the Pruzansky-Kaban type were significantly correlated with SNB (r = 0.457 and ρ = -0.411, respectively), upper incisor deviation (r = -0.446 and ρ = 0.362), chin deviation (r = -0.477 and ρ = 0.527), upper occlusal plane cant (r = -0.672 and ρ = 0.631), and mandibular plane cant (r = -0.557 and ρ = 0.357, P < .05). Conclusion: For unilateral CFM adults, greater condylar hypoplasia in volume, along with more severe mandibular retrusion and facial asymmetry, objectively indicated a higher grade of the Pruzansky-Kaban classification (type IIA). These quantitative distinctions are expected to enhance the diagnostic reliability of CFM.
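
    The asymmetry ratio and the correlation analysis described above can be illustrated with SciPy; the sketch below uses hypothetical per-patient values rather than the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient measurements (condylar volume in mm^3, SNB in degrees).
affected_volume = np.array([620.0, 540.0, 810.0, 300.0, 450.0])
contralateral_volume = np.array([1500.0, 1420.0, 1610.0, 1350.0, 1480.0])
snb_angle = np.array([74.1, 73.5, 78.2, 71.9, 72.8])
pruzansky_type = np.array([1, 2, 1, 2, 2])  # ordinal: 1 = type I, 2 = type IIA

# Asymmetry ratio: affected side / contralateral side, as a percentage.
asym_ratio = 100.0 * affected_volume / contralateral_volume

r, p_r = stats.pearsonr(asym_ratio, snb_angle)            # continuous vs continuous
rho, p_rho = stats.spearmanr(pruzansky_type, snb_angle)   # ordinal vs continuous
print(f"Pearson r = {r:.3f} (p = {p_r:.3f}), Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
```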

    Classification of caries in third molars on panoramic radiographs using deep learning

    Abstract The objective of this study is to assess the classification accuracy of dental caries on panoramic radiographs (PRs) using deep learning algorithms. A convolutional neural network (CNN) based on MobileNet V2 was trained on a reference dataset consisting of 400 cropped panoramic images to classify carious lesions in mandibular and maxillary third molars. For this pilot study, the trained MobileNet V2 was applied to a test set consisting of 100 cropped PRs. The classification accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an accuracy of 0.87, a sensitivity of 0.86, a specificity of 0.88, and an AUC of 0.90 for the classification of carious lesions of third molars on PRs. The presented MobileNet V2-based approach achieved high accuracy in caries classification in third molars. This is beneficial for the further development of deep learning-based automated third molar removal assessment in the future.
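
    As an illustration of the model setup, the sketch below adapts a MobileNet V2 backbone for two-class caries classification using torchvision. The input size, preprocessing, and class head are assumptions for a generic sketch, not the authors' training pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# MobileNet V2 backbone with a two-class head (carious vs non-carious).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, 2)
model.eval()

# Assumed preprocessing for a cropped third-molar region of a panoramic radiograph.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A preprocessed crop would be passed through the model; the logits give class scores.
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # torch.Size([1, 2])
```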

    Patients’ perspectives on the use of artificial intelligence in dentistry: a regional survey

    Abstract The use of artificial intelligence (AI) in dentistry is rapidly evolving and could play a major role in a variety of dental fields. This study assessed patients’ perceptions and expectations regarding AI use in dentistry. An 18-item questionnaire survey focused on demographics, expectancy, accountability, trust, interaction, advantages, and disadvantages was answered by 330 patients; 265 completed questionnaires were included in this study. Frequencies and differences between age groups were analysed using two-sided chi-squared or Fisher’s exact tests with Monte Carlo approximation. The top three perceived disadvantages of AI use in dentistry were (1) the impact on workforce needs (37.7%), (2) new challenges in the doctor–patient relationship (36.2%) and (3) increased dental care costs (31.7%). The major expected advantages were improved diagnostic confidence (60.8%), time reduction (48.3%) and more personalised and evidence-based disease management (43.0%). Most patients expected AI to be part of the dental workflow in 1–5 (42.3%) or 5–10 (46.8%) years. Older patients (> 35 years) expected higher AI performance standards than younger patients (18–35 years) (p < 0.05). Overall, patients showed a positive attitude towards AI in dentistry. Understanding patients’ perceptions may allow professionals to shape AI-driven dentistry in the future.
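
    Group comparisons of this kind can be reproduced with SciPy; the sketch below uses a hypothetical 2x2 contingency table and SciPy's standard chi-squared and Fisher's exact tests (without the Monte Carlo approximation mentioned in the abstract).

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = age group (18-35, >35),
# columns = whether "improved diagnostic confidence" was selected (yes, no).
table = np.array([[52, 78],
                  [109, 26]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)   # exact test, useful for small expected counts
print(f"chi2 = {chi2:.2f}, p = {p_chi2:.4f}; Fisher p = {p_fisher:.4f}")
```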

    Reliability and Agreement of 3D Anthropometric Measurements in Facial Palsy Patients Using a Low-Cost 4D Imaging System

    The reliability (precision) and agreement (accuracy) of anthropometric measurements based on manually placed 3D landmarks using the RealSense D415 were investigated in this paper. Thirty facial palsy patients, with their face in a neutral (resting) position, were recorded simultaneously with the RealSense and a professional 3dMD imaging system. First, the RealSense depth accuracy was determined. Subsequently, two observers placed 14 facial landmarks on the 3dMD and RealSense images, and the distances between landmark placements were assessed. The intra- and inter-rater Euclidean distances between the landmark placements were 0.84 mm (±0.58) and 1.00 mm (±0.70) for the 3dMD landmarks and 1.32 mm (±1.27) and 1.62 mm (±1.42) for the RealSense landmarks, respectively. From these landmarks, 14 anthropometric measurements were derived. Expressed as intra-class correlation coefficients, the intra- and inter-rater measurements had an overall reliability of 0.95 (0.87 to 0.98) and 0.93 (0.85 to 0.97) for the 3dMD measurements, and 0.83 (0.70 to 0.91) and 0.80 (0.64 to 0.89) for the RealSense measurements, respectively. Determined by Bland-Altman analysis, the agreement between the RealSense and 3dMD measurements was on average -0.90 mm (-4.04 to 2.24) and -0.89 mm (-4.65 to 2.86) for intra- and inter-rater agreement, respectively. Based on the reported reliability and agreement, the RealSense D415 can be considered a viable option for performing objective 3D anthropometric measurements of the face in a neutral position when a low-cost, portable camera is required.
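
    The Bland-Altman agreement figures above (bias and limits of agreement) can be computed as in the sketch below, which assumes paired measurements from the two systems; the example values are hypothetical.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement sets."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired distances (mm) from the RealSense and 3dMD systems.
realsense = np.array([31.2, 45.8, 62.1, 28.4, 50.3])
threedmd = np.array([32.0, 46.9, 62.8, 29.1, 51.5])
bias, lo, hi = bland_altman(realsense, threedmd)
print(f"bias = {bias:.2f} mm, limits of agreement = [{lo:.2f}, {hi:.2f}] mm")
```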